Abstract
Deaf and mute individuals communicate among themselves using sign language, but most hearing people find it difficult to understand. Gestures made with two hands often suffer from poorly defined features because the hands overlap. Our project takes a first step toward bridging the communication gap between hearing people and hearing- and speech-impaired people by using sign language. Effectively extending this work to words and common sentences could not only help deaf and mute people communicate faster and more easily with the outside world, but also support the development of autonomous systems for understanding and assisting them. Sign language is the preferred method of communication among deaf and hearing-impaired people all over the world, and its recognition can achieve varying degrees of success when performed with computer vision or other techniques. Sign language is a structured set of gestures in which each gesture has a specific meaning. We propose SNCHAR as a solution to this problem: it enables easy interaction between deaf and hearing-impaired people and those who are not. Here SN stands for Sign language, CHA for Character, and R for Recognition system. The "SNCHAR: Sign Language Character Recognition" system is a Python-based application. It takes live video as input and predicts the letters the user is gesturing in the live feed. It captures frames and locates the hand gesture by searching for a region of skin-colour intensity, separates that gesture area from the rest of the frame, and feeds it to our pre-trained model. The pre-trained model takes the hand gesture as input and predicts a value that represents a letter of the alphabet, which is then displayed on the screen. The user can hear the predicted text by pressing "P" on the keyboard, and can erase it if required by pressing "Z".
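To make the described pipeline concrete, the sketch below shows one way the capture-segment-predict loop could look in Python with OpenCV. It is a minimal illustration only, not the authors' implementation: the model file name, the 26-letter label set, the 64x64 grayscale input size, the HSV skin-colour thresholds, and the quit key are all assumptions, and the text-to-speech step uses pyttsx3 as a stand-in for whatever library the paper actually uses.

```python
# Minimal sketch of the SNCHAR-style loop (assumptions: a Keras model file
# "snchar_model.h5" trained on 64x64 grayscale gesture crops, one class per
# letter A-Z, HSV skin-colour thresholds, OpenCV 4.x API, pyttsx3 for speech).
import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord('A'), ord('Z') + 1)]   # assumed label set
model = load_model("snchar_model.h5")                       # hypothetical path
engine = pyttsx3.init()

# Approximate HSV range for skin tones (assumed; tune for lighting/skin colour).
LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)
UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

sentence = ""
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 1. Locate the hand by thresholding on skin-colour intensity.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # 2. Crop the largest skin-coloured region and feed it to the model.
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
        probs = model.predict(roi[np.newaxis, ..., np.newaxis], verbose=0)
        letter = LABELS[int(np.argmax(probs))]

        # 3. Overlay the predicted letter on the live feed.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, letter, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        sentence += letter   # naive per-frame accumulation; debouncing omitted

    cv2.imshow("SNCHAR", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('p'):      # "P": speak the text predicted so far
        engine.say(sentence)
        engine.runAndWait()
    elif key == ord('z'):    # "Z": erase the predicted text
        sentence = ""
    elif key == ord('q'):    # quit key is an assumption, not from the paper
        break

cap.release()
cv2.destroyAllWindows()
```

In practice the system would also need gesture debouncing (so one held sign yields one letter rather than one per frame) and better hand localisation than a raw colour threshold, but the structure above matches the abstract's capture, segment, classify, display, speak, and erase steps.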