Abstract

Deep learning techniques are now widely used for sign language recognition. This paper proposes a deep learning model for American Sign Language (ASL) detection from webcam images using transfer learning, designed for real-time operation. The model achieves a reported accuracy of 98% when trained on only 15 images per gesture. The research was carried out in Jupyter Notebook, with CUDA and cuDNN used for GPU-accelerated training. To keep real-time detection responsive, the code is run in a local environment rather than on a cloud system. The main aim of this work is to build a model that identifies and detects the sign language alphabet and several frequently used gestures. The model combines convolutional neural networks with the Single Shot Detector (SSD) algorithm to reduce the communication barrier between speech-impaired and hearing people.
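
To make the described pipeline concrete, below is a minimal sketch of real-time webcam detection with an SSD model. It assumes a model exported in TensorFlow's SavedModel format via the TensorFlow Object Detection API (the abstract does not name the framework); the model path, label list, and confidence threshold are illustrative placeholders, not the authors' published code.

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical path: the paper does not publish a checkpoint, so point this
# at any SSD detection model exported in TensorFlow SavedModel format.
MODEL_DIR = "exported_ssd_model/saved_model"

# Hypothetical label map: the ASL alphabet plus a few common gestures,
# mirroring the classes described in the abstract (ids start at 1).
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["hello", "thanks", "yes", "no"]

detect_fn = tf.saved_model.load(MODEL_DIR)

cap = cv2.VideoCapture(0)  # default local webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # SSD models exported from the TF Object Detection API expect a uint8 batch.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)

    boxes = detections["detection_boxes"][0].numpy()
    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)

    h, w = frame.shape[:2]
    for box, score, cls in zip(boxes, scores, classes):
        if score < 0.5:  # confidence threshold; tune per model
            continue
        ymin, xmin, ymax, xmax = box  # normalized coordinates
        p1 = (int(xmin * w), int(ymin * h))
        p2 = (int(xmax * w), int(ymax * h))
        label = LABELS[cls - 1] if 0 < cls <= len(LABELS) else str(cls)
        cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)
        cv2.putText(frame, f"{label}: {score:.2f}", (p1[0], p1[1] - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("ASL detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Running inference frame by frame on a local GPU, as above, is what makes the local setup preferable to a cloud system here: it avoids per-frame network round trips that would break real-time responsiveness.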
