Abstract

Sign language is the primary language of people with hearing and speech impairments, and hand gestures are its key means of communication, both among impaired people and with hearing people. This paper presents a system prototype that automatically recognizes sign language, helping people with speech and hearing impairments communicate more effectively with each other and with hearing people. Such a real-time system enables differently abled people to communicate among themselves without the help of an interpreter. The key steps in designing the system are data acquisition, image segmentation, hand tracking, feature extraction, and gesture recognition. There are two main approaches to sign language recognition: image-based and sensor-based. This paper uses the image-based approach, which identifies sign language gestures from images and converts them to text as well as speech.
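The abstract does not specify the segmentation, feature, or classification techniques used, so the following is only a minimal illustrative sketch of such an image-based pipeline, assuming HSV skin-colour thresholding for segmentation, Hu moments as features, a k-NN classifier trained on hypothetical data, and pyttsx3 for the optional speech output. It is not the authors' implementation.

```python
# Illustrative sketch: segmentation -> feature extraction -> classification -> text/speech.
# All thresholds, features, and the classifier are assumptions for demonstration only.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment_hand(frame_bgr):
    """Isolate skin-coloured regions with a simple HSV threshold (assumed range)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    # Remove small noise specks with a morphological opening.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def extract_features(mask):
    """Describe the largest contour with its 7 Hu moments (scale/rotation tolerant)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # log scale for numerical stability

if __name__ == "__main__":
    # Hypothetical training data: pre-computed feature vectors per sign label.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(20, 7))
    y_train = ["HELLO", "THANKS"] * 10
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

    # Synthetic frame with a skin-coloured blob standing in for a captured hand
    # (a real system would read frames from a camera here).
    frame = np.zeros((240, 320, 3), np.uint8)
    cv2.circle(frame, (160, 120), 60, (90, 130, 200), -1)

    features = extract_features(segment_hand(frame))
    if features is not None:
        text = clf.predict([features])[0]
        print("Recognised sign:", text)
        # Optional speech output (requires pyttsx3):
        # import pyttsx3; eng = pyttsx3.init(); eng.say(text); eng.runAndWait()
```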
