Gesture recognition plays a vital role in improving the convenience of visually challenged people and their interaction with digital devices and environments. It involves the development of systems that allow them to interact with digital devices through hand actions or gestures. To improve user-friendliness, these systems favor intuitive and easily learned gestures, often integrating wearable devices equipped with sensors for precise detection. Incorporating auditory or haptic feedback provides real-time confirmation that a gesture has been recognized. Machine learning (ML) and deep learning (DL) methods are effective tools for accurate gesture detection, with customization options to accommodate individual preferences. In this view, this article concentrates on the design and development of an Automated Gesture Recognition using Zebra Optimization Algorithm with Deep Learning (AGR-ZOADL) model for visually challenged people. The AGR-ZOADL technique aims to recognize gestures to aid visually challenged people. In the AGR-ZOADL technique, the data is first pre-processed using median filtering (MF). Next, the AGR-ZOADL technique applies the NASNet model to learn complex features from the pre-processed data. To enhance the performance of the NASNet model, ZOA-based hyperparameter tuning is performed. For the gesture recognition process, a stacked long short-term memory (SLSTM) model is applied. The performance validation of the AGR-ZOADL technique is carried out on a benchmark dataset. The experimental results show that the AGR-ZOADL methodology achieves significantly better performance than other existing approaches.
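The abstract names the pipeline stages (MF pre-processing, NASNet feature learning, ZOA hyperparameter tuning, SLSTM classification) but gives no implementation details. Purely as an illustration, the minimal sketch below shows how such a pipeline might be assembled in Python with SciPy and TensorFlow/Keras. Every concrete choice here is an assumption, not the authors' code: NASNetMobile as the NASNet variant, grayscale frame sequences, a two-layer LSTM head, a foraging-phase-only ZOA loop, and all names (preprocess, build_classifier, zoa_tune, NUM_CLASSES, SEQ_LEN) are hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of gesture classes
SEQ_LEN = 16      # assumed frames per gesture clip

def preprocess(frames: np.ndarray) -> np.ndarray:
    """Median filtering (MF) to suppress impulse noise.
    Assumes grayscale frames of shape (SEQ_LEN, H, W)."""
    return np.stack([median_filter(f, size=3) for f in frames])

# NASNet backbone for per-frame feature extraction
# (NASNetMobile with ImageNet weights is an assumption).
backbone = tf.keras.applications.NASNetMobile(
    include_top=False, weights="imagenet",
    pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False

def build_classifier(units1=128, units2=64, lr=1e-3):
    """Stacked LSTM (SLSTM) head over per-frame NASNet features.
    units1, units2, and lr are the kind of hyperparameters the ZOA
    would tune; the defaults here are placeholders."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, backbone.output_shape[-1])),
        tf.keras.layers.LSTM(units1, return_sequences=True),
        tf.keras.layers.LSTM(units2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def zoa_tune(fitness, bounds, pop=8, iters=10):
    """Highly simplified Zebra Optimization Algorithm loop
    (foraging phase only; the defence phase is omitted).
    A sketch, not the paper's procedure."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = lo + np.random.rand(pop, len(bounds)) * (hi - lo)
    F = np.array([fitness(x) for x in X])
    for _ in range(iters):
        best = X[F.argmin()]                 # "pioneer zebra"
        for i in range(pop):
            r = np.random.rand(len(bounds))
            I = np.random.randint(1, 3)      # I drawn from {1, 2}
            cand = np.clip(X[i] + r * (best - I * X[i]), lo, hi)
            fc = fitness(cand)
            if fc < F[i]:                    # greedy acceptance
                X[i], F[i] = cand, fc
    return X[F.argmin()]
```

In a full experiment, the fitness function passed to zoa_tune would train build_classifier with candidate (units1, units2, lr) values and return the validation loss, so the ZOA search directly optimizes recognition performance; that wiring is omitted here for brevity.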