Abstract

Communicating with people who have a hearing disability is always a major challenge. The work presented in this paper is an effort towards examining the difficulties in classifying characters of Indian Sign Language (ISL). Sign language alone is not enough, because most people without hearing or speech disabilities have never learnt it, and to them the gestures appear mixed or disordered. Communication should work both ways. In this paper, we introduce a sign language recognition system for Indian Sign Language. The user captures images of hand gestures with a web camera, and the system predicts and displays the name of the captured sign. The captured image undergoes a series of processing steps involving computer vision techniques such as conversion to gray-scale, dilation, and a mask operation. A Convolutional Neural Network (CNN) is used to train our model and identify the pictures. Our model achieved an accuracy of about 95%.
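The preprocessing pipeline named in the abstract (gray-scale conversion, dilation, and masking before CNN classification) can be sketched with plain NumPy. This is a minimal illustration, not the paper's actual implementation: the luma weights, threshold value, and 3x3 structuring element are assumptions chosen for the example.

```python
import numpy as np

def to_grayscale(img_rgb):
    # weighted sum of the RGB channels (ITU-R BT.601 luma coefficients)
    return img_rgb @ np.array([0.299, 0.587, 0.114])

def hand_mask(gray, thresh=60):
    # crude intensity threshold to separate the hand from a dark background;
    # the real system would likely use a tuned or adaptive threshold
    return (gray > thresh).astype(np.uint8)

def dilate(binary, k=3):
    # morphological dilation with a k x k square structuring element:
    # a pixel becomes 1 if any pixel in its k x k neighbourhood is 1
    pad = k // 2
    padded = np.pad(binary, pad)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

# tiny synthetic "frame": a bright 2x2 patch on a dark 6x6 background
frame = np.zeros((6, 6, 3))
frame[2:4, 2:4] = 255
gray = to_grayscale(frame)
mask = dilate(hand_mask(gray))
print(int(mask.sum()))  # dilation grows the 2x2 patch to 4x4 -> 16
```

The dilated mask would then be applied to the frame (or fed directly) as the input image for the CNN classifier.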

Highlights

  • Deaf and dumb people communicate with one another using sign language, but it is difficult for people who have not learnt it to understand them

  • One of the most important requirements for social survival is communication



Introduction

Deaf and dumb people communicate with one another using sign language, but it is difficult for people who have not learnt it to understand them. ISL signs most letters with two hands (20 out of 26), while ASL signs with a single hand. This paper aims to take the first step in using Indian Sign Language to bridge the communication gap between hearing people and deaf people. Extending this work to words and common phrases would not only make it easier for deaf and dumb people to communicate with the outside world, but may also help in the development of autonomous systems for understanding and assisting them.
