Abstract

Communication is integral to every human's life, allowing individuals to express themselves and understand one another. For the hearing-impaired population, who rely on sign language, this process is often challenging because relatively few people are proficient in sign language. Image classification models can be used to build assistive systems that address this communication barrier. This paper conducts a comprehensive literature review and experiments to identify the state of the art in sign language recognition, and finds a lack of research on Norwegian Sign Language (NSL). To address this gap, we created a dataset from scratch containing 24,300 images of 27 NSL alphabet signs and performed a comparative analysis of several machine learning models, including the Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Convolutional Neural Network (CNN), on this dataset. The models were evaluated on accuracy and computational efficiency. On these metrics, the SVM and CNN were the most effective models, both achieving 99.9% accuracy while remaining computationally efficient. The research presented in this paper thus aims to contribute to the field of NSL recognition and to serve as a foundation for future studies in this area.
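For readers who want to reproduce a comparison of this kind, the sketch below illustrates the evaluation protocol the abstract describes (scoring classifiers on accuracy and wall-clock cost). It is not the authors' code: the NSL dataset, image size, and hyperparameters are placeholders, and a small synthetic feature matrix stands in for the real images.

```python
# Minimal sketch of a comparative evaluation of SVM and KNN classifiers,
# scored on accuracy and training/prediction time. All data here is
# synthetic; substitute flattened NSL images and labels in practice.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes = 27                    # 27 NSL alphabet signs (from the abstract)
X = rng.random((2700, 32 * 32))   # placeholder for flattened grayscale images
y = rng.integers(0, n_classes, size=2700)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

models = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)          # training cost
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    preds = model.predict(X_test)        # inference cost
    infer_time = time.perf_counter() - t0

    print(f"{name}: accuracy={accuracy_score(y_test, preds):.3f}, "
          f"train={train_time:.2f}s, predict={infer_time:.2f}s")
```

A CNN baseline would follow the same loop but typically requires a deep learning framework and unflattened image tensors, so it is omitted from this sketch.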
