Abstract

Sign language is the primary means of communication for deaf or hard-of-hearing people, and sign language recognition (SLR) has become an active research area in recent years. Achieving accurate results across large image data sets is crucial for an automatic SLR system. Convolutional neural networks (CNNs) outperform traditional neural networks on a wide range of visual tasks, yet existing approaches still leave room for improvement on measures such as accuracy, recall, and F1 score. In this context, we present a CNN model with multi-scale layers that use filters of various sizes as an additional layer in the design. The model can be applied to a variety of data sets and is also suitable for real-time image streams. Two self-created data sets, American Sign Language (ASL) and Indian Sign Language (ISL), were used to evaluate the proposed model. Transfer learning models such as VGG16, MobileNet, VGG19, MobileNetV2, ResNet50, and InceptionV3 were also evaluated on these data sets. Experimental results show that the proposed model outperforms the basic CNN model, achieving 98% accuracy on the ISL data set and 95% on the ASL data set.

Keywords: American sign language (ASL), Convolutional neural network (CNN), Indian sign language (ISL), Sign language recognition (SLR)
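The abstract does not spell out the architecture, but its "multi-scale layers with various filters" suggests parallel convolutions at several kernel sizes whose outputs are merged. Below is a minimal sketch of such a block in Keras/TensorFlow; the kernel sizes (3x3, 5x5, 7x7), filter counts, input resolution (64x64), and class count (26 letters) are all assumptions for illustration, not the authors' actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def multi_scale_block(x, filters=32):
    # Hypothetical multi-scale block: parallel convolutions with
    # different kernel sizes, concatenated along the channel axis.
    b3 = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, (5, 5), padding="same", activation="relu")(x)
    b7 = layers.Conv2D(filters, (7, 7), padding="same", activation="relu")(x)
    return layers.Concatenate()([b3, b5, b7])

# Assumed input size and class count (e.g. 26 static ASL letters).
inputs = layers.Input(shape=(64, 64, 3))
x = multi_scale_block(inputs, filters=32)
x = layers.MaxPooling2D()(x)
x = multi_scale_block(x, filters=64)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(26, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Running convolutions at several kernel sizes in parallel lets the network capture hand shapes at different spatial scales before the feature maps are merged, which is one plausible reading of the "various filters" described in the abstract.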
