Abstract

In artificial intelligence applications, advanced computational models such as deep learning are employed to achieve high accuracy, often requiring the execution of numerous operations. Conversely, lightweight computational models are far more resource-efficient, making them suitable for a wide range of devices, including smartphones, tablets, and wearable technology. This paper presents an ultra-low-computation solution for interpreting sign languages to assist deaf and hard-of-hearing individuals without requiring specialized hardware or significant computational resources. The proposed approach first performs data abstraction on the input: the image is systematically scanned from various perspectives, and the collected information is encoded into a one-dimensional vector. The abstracted information is then processed by a Fully Connected Neural Network (FCN), yielding highly accurate output. We also introduce two abstraction methods, Opaque and Glass, inspired by the way light interacts with different types of objects. The proposed abstractions capture the hand gesture's outer boundary as well as its row-wise and column-wise pixel density. Experiments on three datasets confirm the efficiency of the proposed method, which achieves 99.4% accuracy in recognizing American Sign Language, 99.96% in recognizing Indian Sign Language, and 99.95% in recognizing Bangla Sign Language. Notably, the model size and the number of MAC operations are significantly smaller than those of state-of-the-art models trained on the same datasets.
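To make the two abstraction ideas concrete, the sketch below illustrates one plausible reading of them on a binary hand-segmentation mask: an Opaque-style scan records where the silhouette is first met from each side (its outer boundary), while a Glass-style scan records row-wise and column-wise pixel density. The mask size, scanning directions, and encoding are assumptions for illustration and may differ from the paper's exact procedure.

```python
import numpy as np

def opaque_abstraction(mask: np.ndarray) -> np.ndarray:
    """'Opaque'-style sketch: like light blocked by a solid object, record the
    first foreground index when scanning each row from the left and right,
    and each column from the top and bottom (the silhouette's outer boundary)."""
    h, w = mask.shape
    rows_any = mask.any(axis=1)
    cols_any = mask.any(axis=0)
    # First/last foreground index per row and column (sentinel if empty).
    left   = np.where(rows_any, mask.argmax(axis=1), w)
    right  = np.where(rows_any, w - 1 - mask[:, ::-1].argmax(axis=1), -1)
    top    = np.where(cols_any, mask.argmax(axis=0), h)
    bottom = np.where(cols_any, h - 1 - mask[::-1, :].argmax(axis=0), -1)
    return np.concatenate([left, right, top, bottom]).astype(np.float32)

def glass_abstraction(mask: np.ndarray) -> np.ndarray:
    """'Glass'-style sketch: like light passing through a translucent object,
    record how much of each row and column is occupied (pixel density)."""
    row_density = mask.sum(axis=1) / mask.shape[1]
    col_density = mask.sum(axis=0) / mask.shape[0]
    return np.concatenate([row_density, col_density]).astype(np.float32)

# Example: a 64x64 binary hand mask reduced to a single one-dimensional
# feature vector, suitable as input to a small fully connected classifier.
mask = np.random.rand(64, 64) > 0.7          # placeholder for a segmented hand
features = np.concatenate([opaque_abstraction(mask), glass_abstraction(mask)])
print(features.shape)                         # (64*4 + 64*2,) = (384,)
```

In this reading, the resulting vector is what the fully connected network would consume, which is why the overall model size and MAC count can stay far below those of convolutional pipelines operating on raw images.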
