Abstract

Sign language and spoken language both originate in the human brain, but they differ greatly in their linguistic structure and physical mode of transmission. Deaf people have a hearing impairment and mute people have a speech impairment, while their other faculties are typically unaffected. People with hearing and speech impairments are often unable to convey their statements through speech, so they use sign language to communicate with one another and with the rest of the world. As a result, sign language recognition (SLR) has recently become one of the most active topics in computer vision and machine learning. Although sign language is not an international language, researchers are working to make such recognition systems usable at large scale. This paper proposes a model that recognizes Bangla Sign Language (BSL) gestures using a convolutional neural network (CNN) to support Bengali speakers. A large, publicly available sign language dataset was used for detecting BSL. The CNN recognizes and classifies the hand image in the frame and then categorizes the hand skeletal features extracted from the image into a standard communicative meaning. The proposed method achieves 98.75% accuracy.
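
To make the described pipeline concrete, the sketch below shows one way such a gesture classifier could be assembled with TensorFlow/Keras. The input resolution, layer sizes, and number of BSL classes are illustrative assumptions; the abstract does not specify the paper's exact architecture, dataset splits, or preprocessing.

# Minimal sketch of a CNN gesture classifier (TensorFlow/Keras).
# The 64x64 grayscale input, layer widths, and NUM_CLASSES = 38 are
# assumptions for illustration, not the architecture reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 38           # assumed number of BSL gesture classes
INPUT_SHAPE = (64, 64, 1)  # assumed size of the preprocessed hand image

def build_model():
    # Two convolution/pooling stages followed by a small dense head.
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    model.summary()
    # Training on labeled hand images would then follow the usual pattern:
    # model.fit(train_images, train_labels, epochs=20,
    #           validation_data=(val_images, val_labels))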
