Arabic sign language (ArSL) has witnessed ground-breaking research aimed at identifying hand gestures and signs through deep learning (DL) models. Sign language (SL) is a unique communication tool that bridges the gap between people with hearing impairment and hearing people. An ArSL recognition system is therefore of immense importance for different groups of people, since it enables individuals with hearing impairment to communicate effectively. In SLs, signs are characterized by variations in hand positions, shapes, motions, body parts, and facial expressions, which pose a significant challenge to visual recognition in computer vision (CV). An automated sign detection technique requires two primary steps: the extraction of discriminative features and the classification of the input data. Several approaches for detecting and classifying SLs have previously been proposed. In this study, an Improved Metaheuristics with Transfer Learning based Arabic Sign Language Identification System (IMTL-ArSL) is developed. The primary objective of the IMTL-ArSL method is to detect and classify various signs of the Arabic language. In the IMTL-ArSL model, bilateral filtering (BF) is first applied for preprocessing. Next, the Residual Network (ResNet50v2) model is used to extract features, and an improved arithmetic optimization algorithm (IAOA) is utilized for hyperparameter tuning. Finally, a gated recurrent unit (GRU) network is employed to identify the signs. The IMTL-ArSL approach is evaluated on a benchmark dataset, and the experimental results demonstrate a superior accuracy of 93.87% over other techniques.
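The abstract outlines a pipeline of BF preprocessing, ResNet50v2 feature extraction, IAOA hyperparameter tuning, and GRU classification. The sketch below shows how such a pipeline could be assembled with OpenCV and TensorFlow/Keras; it is an illustration under assumptions, not the authors' implementation. The number of sign classes, frames per sign, GRU width, and learning rate are placeholder values, and the IAOA search itself is not reproduced here.

```python
# Illustrative sketch only: bilateral-filter preprocessing, ResNet50V2 feature
# extraction, and a GRU classifier over per-frame features. All sizes below are
# assumed placeholders; the IAOA hyperparameter search from the abstract would
# normally select values such as the GRU width and learning rate.
import cv2
import numpy as np
import tensorflow as tf

NUM_CLASSES = 32      # assumed number of Arabic sign classes
FRAMES_PER_SIGN = 16  # assumed sequence length per sign sample

def preprocess_frame(frame_bgr):
    """Denoise a frame with an edge-preserving bilateral filter and resize it."""
    smoothed = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    resized = cv2.resize(smoothed, (224, 224))
    return tf.keras.applications.resnet_v2.preprocess_input(
        resized.astype(np.float32))

# Frozen ResNet50V2 backbone producing one 2048-d feature vector per frame.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

def extract_sequence_features(frames_bgr):
    """Map a list of raw BGR frames to a (FRAMES_PER_SIGN, 2048) feature sequence."""
    batch = np.stack([preprocess_frame(f) for f in frames_bgr])
    return backbone.predict(batch, verbose=0)

# GRU classifier over the per-frame feature sequence; the 128-unit GRU and the
# Adam learning rate stand in for hyperparameters the paper tunes with IAOA.
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES_PER_SIGN, 2048)),
    tf.keras.layers.GRU(128),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
classifier.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```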