Abstract

Hand gestures are one of the nonverbal communication modalities used in sign language. Sign language is most often used by individuals with hearing or speech impairments to communicate with one another or with hearing people. Many manufacturers worldwide have created sign language systems; however, these are neither adaptable nor cost-effective for end users. To address this, a prototype system has been developed that detects sign language automatically, allowing deaf and hard-of-hearing individuals to convey messages to hearing people more effectively. Static images were processed with a convolutional neural network and a feature-extraction approach, and each sign was trained with ten samples. Image-processing methods determine the fingertip location in the static images, and the recognized signs are converted to text. During the testing phase, the proposed technique recognizes signer images captured in real time. The results demonstrate that the proposed Sign Language Recognition System recognizes images accurately.
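The abstract does not specify how the fingertip location is determined, so as an illustration only: a common heuristic on a binarized hand image of an upright hand is to take the topmost foreground pixel as the fingertip. A minimal NumPy sketch of that heuristic follows; the function name and the toy mask are hypothetical, not taken from the paper.

```python
import numpy as np

def fingertip_location(mask: np.ndarray) -> tuple:
    """Return (row, col) of the topmost foreground pixel in a binary
    hand mask -- a simple fingertip heuristic for an upright hand.
    `mask` is a 2-D array whose nonzero pixels belong to the hand."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError("empty mask: no hand pixels found")
    top = rows.argmin()  # smallest row index = highest pixel in the image
    return int(rows[top]), int(cols[top])

# Toy 5x5 binary mask: a single raised "finger" at column 2.
mask = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
])
print(fingertip_location(mask))  # (0, 2)
```

In practice, systems of this kind often combine such geometric cues with the CNN's learned features rather than relying on either alone; the paper's exact combination is not described in the abstract.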
