Abstract

Deep learning techniques are now widely used for sign language recognition. In this paper, a deep learning model is proposed for American Sign Language detection from webcam images using transfer learning; the model is designed for real-time operation. The author reports 98% accuracy for the model when it is trained on 15 images per gesture. Jupyter Notebook is used as the development environment, and the CUDA and cuDNN GPU libraries are used to accelerate model training. To keep real-time detection responsive, the code is run in a local environment rather than on a cloud platform. The main aim of this work is to build a model that identifies and detects the sign language alphabet and several frequently used gestures. The model is based on deep learning, combining convolutional neural networks with the Single Shot Detector (SSD) algorithm, to overcome the communication difficulty faced between speech-impaired and hearing people.
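The abstract describes a real-time pipeline: webcam frames are fed to an SSD-style detector, and confident detections are drawn on screen. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' implementation; the label set and the `detect_fn` wrapper around the trained model are assumptions.

```python
import numpy as np

# Illustrative label set: the abstract mentions ASL alphabet letters plus
# frequently used gestures; these particular names are assumptions.
LABELS = ["A", "B", "C", "hello", "thanks"]

def decode_detections(boxes, scores, classes, threshold=0.5):
    """Filter raw SSD outputs down to confident (label, score, box) tuples.

    boxes:   (N, 4) array of normalized [ymin, xmin, ymax, xmax] corners
    scores:  (N,) detection confidences
    classes: (N,) integer indices into LABELS
    """
    keep = scores >= threshold
    return [
        (LABELS[int(c)], float(s), tuple(float(v) for v in b))
        for b, s, c in zip(boxes[keep], scores[keep], classes[keep])
    ]

def run_webcam(detect_fn, threshold=0.5):
    """Real-time loop: grab webcam frames, run the detector, draw labels.

    `detect_fn` is assumed to wrap the trained SSD model and return
    (boxes, scores, classes) for one BGR frame; press 'q' to quit.
    """
    import cv2  # imported here so the decoding helper stays dependency-free
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        for label, score, (y1, x1, y2, x2) in decode_detections(
            *detect_fn(frame), threshold=threshold
        ):
            p1 = (int(x1 * w), int(y1 * h))
            p2 = (int(x2 * w), int(y2 * h))
            cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)
            cv2.putText(frame, f"{label} {score:.2f}", p1,
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("ASL detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Running the detector locally, as the abstract notes, avoids the round-trip latency a cloud service would add to each frame.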
