Abstract

According to the World Health Organization, about 5% of the world's population, approximately 466 million people, is deaf, mute, or has disabling hearing loss. A communication barrier often separates people with such disabilities from others. We communicate to share our thoughts, but for a person who is deaf or mute this becomes difficult, and sign language or Braille is often the only means of communication. Sign language is a way of communicating through hand gestures; however, it remains difficult for its users to communicate with others, as most people do not understand it. Hence, we aim to bridge this communication gap between a deaf/mute person and others by developing a system that acts as a mediator between the two. We propose a hand-gesture recognition system that works in four steps: (1) capture a live stream of hand gestures using a web-cam; (2) form images from the video frames; (3) preprocess these images; (4) recognize the sign-language hand gestures and convert them into text/audio output. The system is implemented using image processing and neural networks. We tested the proposed models on a Kaggle dataset, our own dataset, and a dataset formed by combining the two. To rule out ambiguity in the results, we also introduced variation in the background. Most of the models give similar test accuracy on both plain and cluttered backgrounds.
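The abstract does not give implementation details, so the following is only a minimal sketch of the four-step pipeline it describes, under assumptions not stated in the paper: OpenCV for capture and preprocessing, a pre-trained Keras CNN saved as "sign_cnn.h5", a 64x64 grayscale input shape, and an A-Z label set. All of these names and parameters are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the four-step pipeline: webcam stream -> frames ->
# preprocessing -> CNN classification with text overlay. Model file, input
# size, and label set are assumptions for illustration only.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed class set
model = load_model("sign_cnn.h5")                         # assumed model file

cap = cv2.VideoCapture(0)                # Step 1: live stream from the web-cam
while True:
    ok, frame = cap.read()               # Step 2: one image per video frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # Step 3: preprocess:
    roi = cv2.resize(gray, (64, 64)).astype("float32") / 255.0  # grayscale,
    x = roi.reshape(1, 64, 64, 1)                         # resize, normalize
    probs = model.predict(x, verbose=0)                   # Step 4: classify
    letter = LABELS[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)                      # overlay text output
    cv2.imshow("Sign recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                 # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Classifying every frame independently, as above, keeps the sketch simple; a real system would likely add hand segmentation and smooth predictions over several frames before emitting text or audio.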
