Abstract
Sign language is a rich visual communication medium for people with hearing or speech impairments, who have long relied on sign language recognition (SLR) to communicate and integrate into society. This research identifies elementary Indian Sign Language gestures in images and videos and compares machine learning methods for the task. Images are pre-processed and features are extracted to improve the performance of the deployed models. The goal is to build a system that uses an efficient classifier to deliver reliable recognition of hand sign-language gestures. The accuracy and precision of the classification methods are analysed and compared on the ISL-CSLTR database (Indian Sign Language dataset for Continuous Sign Language Translation and Recognition). Compared with the decision tree and KNN models, the Random Forest model achieved higher accuracy (84%) and precision (83%), along with 77% recall and an F-score of 0.7. The models were implemented and evaluated in Python.
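The classifier comparison described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the synthetic features generated here merely stand in for the pre-processed, feature-extracted ISL-CSLTR gesture data, and the scikit-learn defaults are assumptions rather than the authors' settings.

```python
# Sketch: compare Random Forest, Decision Tree, and KNN on gesture-like
# feature vectors, reporting accuracy, precision, recall, and F-score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Synthetic stand-in for pre-processed gesture features (NOT the real dataset)
X, y = make_classification(n_samples=500, n_features=20, n_classes=4,
                           n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Macro-averaged metrics treat every gesture class equally
    print(f"{name}: acc={accuracy_score(y_test, pred):.2f} "
          f"prec={precision_score(y_test, pred, average='macro'):.2f} "
          f"rec={recall_score(y_test, pred, average='macro'):.2f} "
          f"f1={f1_score(y_test, pred, average='macro'):.2f}")
```

On real gesture data, the same loop would be fed the extracted image features in place of `make_classification`, and per-class metrics could be inspected with `classification_report`.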
International Journal of System of Systems Engineering