Abstract

Sign language recognition offers an effective means for disabled people to communicate with others, allowing them to convey the intended information through sign language without difficulty. Recent advances in computer vision and image processing can be leveraged to detect and classify the signs used by disabled people in an effective manner. Metaheuristic optimization algorithms can be designed to fine-tune the hyperparameters of Deep Learning (DL) models, since these hyperparameters considerably influence classification results. With this motivation, the current study designs an Optimal Deep Transfer Learning Driven Sign Language Recognition and Classification (ODTL-SLRC) model for disabled people. The aim of the proposed ODTL-SLRC technique is to recognize and classify the sign languages used by disabled people. The proposed ODTL-SLRC technique employs the EfficientNet model to generate a collection of useful feature vectors. In addition, the hyperparameters of the EfficientNet model are fine-tuned with the help of the HGSO algorithm. Moreover, a Bidirectional Long Short-Term Memory (BiLSTM) network is employed for sign language classification. The proposed ODTL-SLRC technique was experimentally validated on a benchmark dataset, and the results were inspected under several measures. The comparative analysis established the superior efficiency of the proposed ODTL-SLRC technique over recent approaches.
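To make the described pipeline concrete, the following is a minimal sketch, assuming a Keras/TensorFlow setting, an EfficientNetB0 backbone, 16-frame clips of size 224x224x3, and a hypothetical number of sign classes. It illustrates only the EfficientNet-to-BiLSTM architecture outlined in the abstract; the HGSO hyperparameter search is not reproduced, so the learning rate and LSTM width shown are placeholders that such a search would select.

```python
# Sketch of an EfficientNet + BiLSTM sign-classification pipeline (not the
# authors' exact implementation). Assumed settings: Keras/TensorFlow,
# EfficientNetB0 backbone, 16-frame clips, NUM_CLASSES sign categories.
import tensorflow as tf

NUM_CLASSES = 30                 # assumption: number of sign classes in the dataset
FRAMES, H, W, C = 16, 224, 224, 3

# EfficientNet backbone (transfer learning) used as a per-frame feature extractor.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, pooling="avg", weights="imagenet"
)
backbone.trainable = False       # transfer-learning setting: freeze pretrained weights

inputs = tf.keras.Input(shape=(FRAMES, H, W, C))
# Apply the backbone to every frame, yielding one feature vector per frame.
features = tf.keras.layers.TimeDistributed(backbone)(inputs)
# BiLSTM aggregates the frame-level feature vectors for sequence classification.
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128))(features)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # placeholder for an HGSO-tuned value
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

In this sketch, the pretrained backbone is frozen and only the BiLSTM head is trained, which is one common transfer-learning choice; a hyperparameter optimizer such as HGSO would typically search over values like the learning rate and recurrent units shown above.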
