Abstract

People with hearing and speech impairments use sign language to communicate with hearing people, and sign language recognition has become a major area of research in computer vision. Researchers have captured hand signs with a variety of input devices, such as data gloves, web cameras, depth cameras, color cameras, and Microsoft's Kinect sensor. In this paper we discuss the importance of American Sign Language (ASL), propose a classification technique, and report its results. ASL uses only one hand to display these gestures, which makes them easier to interpret and understand. The signs are captured using a digital sensor called the Leap Motion Controller (LMC), a 3D non-contact motion sensor that tracks and detects hands, fingers, bones, and finger-like objects. The proposed system uses a Multi-Layer Perceptron (MLP) neural network trained with the Back Propagation (BP) algorithm to build a classification model that takes the extracted feature set as input and recognizes the different signs. We consider the 26 letters of the American Sign Language alphabet. The MLP is evaluated on a dataset of 520 samples (20 samples per letter). The recognition rate of the proposed system is 96.15%.
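As a rough illustration of the classification stage described above (a sketch, not the authors' implementation), the following Python snippet trains an MLP on a feature matrix standing in for Leap Motion hand features. The feature dimensionality, hidden-layer size, and train/test split are assumptions; the class count (26) and sample count (520) follow the abstract.

```python
# Illustrative sketch: an MLP classifier for 26 ASL letter classes.
# Feature dimension, hidden-layer size, and split ratio are assumed values.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

NUM_CLASSES = 26        # letters A-Z
SAMPLES_PER_CLASS = 20  # 520 samples in total, as stated in the abstract
NUM_FEATURES = 30       # assumed length of the Leap Motion feature vector

# Placeholder data standing in for real Leap Motion feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(NUM_CLASSES * SAMPLES_PER_CLASS, NUM_FEATURES))
y = np.repeat(np.arange(NUM_CLASSES), SAMPLES_PER_CLASS)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# MLP trained with backpropagation (gradient-based optimization).
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"Recognition rate: {clf.score(X_test, y_test):.2%}")
```

With real Leap Motion features in place of the placeholder data, the same pipeline would report the per-letter recognition rate on the held-out samples.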
