Abstract
Sign language is an indispensable means of communication for deaf-mute people because of their hearing impairment. At present, sign language is not widely known among hearing people, so many of them are unwilling to talk with the deaf-mute, or must spend considerable time and effort working out the intended meaning. Sign Language Recognition (SLR), which aims to translate sign language into text or speech for people who know little about it, can therefore greatly help deaf-mute and hearing people communicate. In this study, a real-time vision-based static hand gesture recognition system for sign language was developed. All data were collected from a USB camera connected to a computer, and no auxiliary items (such as gloves) were required. The proposed system uses a skin-color algorithm in HSV color space to find the Region of Interest (ROI) containing the hand gesture. After all pre-processing was completed, 8 features were extracted from each sample using Principal Component Analysis (PCA). The machine learning approach used for recognition was the Support Vector Machine (SVM). Experimental results show that the system can distinguish five American Sign Language hand gestures (B, D, F, L, and U) with a success rate of about 99.4%.
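The pipeline described in the abstract (HSV skin segmentation to locate the hand ROI, then PCA reduction to 8 features ahead of an SVM classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the HSV threshold bounds, image sizes, and the SVD-based PCA are assumptions chosen for the sketch, and the synthetic data stands in for real camera frames.

```python
import numpy as np

# --- Skin segmentation by HSV thresholding (bounds are illustrative, not from the paper) ---
def skin_mask(hsv, h_max=25, s_min=40, v_min=60):
    """Boolean skin mask for an HxWx3 HSV image (H in 0-179, OpenCV convention)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h <= h_max) & (s >= s_min) & (v >= v_min)

# --- PCA via SVD, reducing each sample to 8 features as the abstract describes ---
def pca_features(X, n_components=8):
    """Project row-vector samples X (n_samples x n_pixels) onto the top principal axes."""
    Xc = X - X.mean(axis=0)            # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # coordinates in the principal subspace

# Demo on synthetic data standing in for preprocessed hand-gesture images
rng = np.random.default_rng(0)
hsv_img = rng.integers(0, 180, size=(32, 32, 3)).astype(np.uint8)
mask = skin_mask(hsv_img)              # boolean ROI mask

X = rng.standard_normal((20, 100))     # 20 flattened samples, 100 pixels each
feats = pca_features(X)
print(feats.shape)                     # (20, 8): 8 PCA features per sample
```

In a full system, the 8-dimensional feature vectors would then be fed to an SVM classifier (e.g. scikit-learn's `SVC`) trained on labeled examples of the five gestures; kernel choice and hyperparameters are not specified in the abstract.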