Abstract
Sign language recognition is one of the most researched topics in the field of computer vision. Hearing-impaired people use sign languages to communicate with the rest of society. In this paper, a simple Convolutional Neural Network (CNN) with fewer operations has been developed to recognize the Indian Sign Language (ISL) alphabet. We have collected a dataset of 24 alphabet classes to train and test the proposed network. This dataset contains a total of 1502 images of hand gestures, segmented from larger frames containing signers. The Indian Sign Language dataset publicly available on Kaggle, containing 26 alphabet letters and 9 digits, is also used in this work. A testing accuracy of 99.67% was obtained on the collected dataset, and a testing accuracy of 99.86% was obtained on the 24 alphabet letters of the Kaggle dataset. The performance of the proposed model is compared with standard CNN models such as InceptionV3 and VGG16. On the collected dataset, the InceptionV3 network gives a testing accuracy of 98.67% and the VGG16 model gives 99%. Thus, the developed CNN model achieves higher accuracy than the InceptionV3 and VGG16 models, which are considered to be among the best models for static sign language recognition.
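To make the idea of a lightweight CNN classifier concrete, the sketch below shows a minimal Keras model for 24-class static hand-gesture images. The layer widths, input size (128x128 RGB), and training configuration are illustrative assumptions; the abstract does not specify the paper's exact architecture, only that it uses fewer operations than InceptionV3 and VGG16.

```python
# Minimal sketch of a lightweight CNN for 24-class ISL alphabet recognition.
# NOTE: layer sizes and the 128x128 input shape are assumptions for
# illustration, not the architecture reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 24             # ISL alphabet classes in the collected dataset
INPUT_SHAPE = (128, 128, 3)  # assumed image size

def build_small_cnn():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_small_cnn()
    model.summary()  # parameter count is far below VGG16 or InceptionV3
```

A model of this size has on the order of a few hundred thousand parameters, which illustrates why such a network is cheaper to run than VGG16 or InceptionV3 while still being able to separate a small number of static gesture classes.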