Abstract

Sign language is the primary means of communication for deaf and mute people. In a population as large as India's, a substantial number of people rely on Indian Sign Language to communicate. Unfortunately, most of the population does not understand sign language, which limits communication between people with hearing or speech disabilities and the wider population. We propose a system to bridge this communication gap. First, images are captured from a webcam in the RGB color space. Preprocessing and semantic segmentation are then applied to the input image: with a simple background, the segmented RGB image is converted to grayscale and background noise is removed, and Otsu's thresholding method is used to segment the image. A convolutional neural network is then trained using the skin-segmented hand images as input. A classification accuracy of 99.33% was attained on the training data for 36 static hand gestures from Indian Sign Language. The model also performed well in a real-time implementation.
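The paper's implementation is not reproduced on this page, but the steps named in the abstract map naturally onto standard tools. Below is a minimal sketch, assuming OpenCV for capture and Otsu segmentation and Keras for the CNN; the image size, layer configuration, and helper names (`segment_hand`, `build_model`) are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of the abstract's pipeline: webcam frame -> grayscale ->
# Otsu segmentation -> CNN classifier. Assumes OpenCV and TensorFlow/Keras.
import cv2
from tensorflow.keras import layers, models

IMG_SIZE = 64      # assumed input resolution (not specified in the abstract)
NUM_CLASSES = 36   # 36 static ISL gestures

def segment_hand(frame_bgr):
    """Convert a webcam frame to a binary hand mask via Otsu thresholding."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress background noise
    # Otsu's method automatically picks the threshold that minimises
    # intra-class variance between foreground (hand) and background.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.resize(mask, (IMG_SIZE, IMG_SIZE))

def build_model():
    """A small CNN over segmented hand images (illustrative layout only)."""
    return models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
```

In a real-time loop, each frame read from `cv2.VideoCapture(0)` would pass through `segment_hand` before being fed to the trained model for prediction, matching the webcam-based setup the abstract reports.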
