Abstract

In artificial intelligence applications, advanced computational models such as deep learning achieve high accuracy at the cost of executing a large number of operations. Lightweight computational models, by contrast, are more resource-efficient and therefore suitable for a wide range of devices, including smartphones, tablets, and wearables. This paper presents an ultra-low-computation solution for interpreting sign languages to assist deaf and hard-of-hearing individuals without requiring specialized hardware or significant computational resources. The proposed approach first performs data abstraction on the input image: the image is systematically scanned from various perspectives, and the collected information is encoded into a one-dimensional vector. The abstracted information is then processed by a Fully Connected Neural Network (FCN), yielding highly accurate output. We also introduce two abstraction methods, Opaque and Glass, inspired by the interaction of light with different types of objects. These abstractions capture the outer boundary of the hand gesture as well as its row-wise and column-wise pixel density. Experiments on three datasets confirm the efficiency of the proposed method, which achieves 99.4% accuracy in recognizing American Sign Language, 99.96% in recognizing Indian Sign Language, and 99.95% in recognizing Bangla Sign Language. Notably, the model size and the number of multiply-accumulate (MAC) operations are significantly smaller than those of state-of-the-art models trained on the same datasets.
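The abstract does not spell out the exact encoding, but the light-interaction analogy suggests one plausible reading: Opaque records where light scanned from an edge first hits the hand (the outer boundary), and Glass records how much of each scan line the hand occupies (pixel density). The following minimal NumPy sketch illustrates that reading on a binary hand mask; every name, the normalization, and the four-direction scanning scheme are assumptions made for illustration, not the paper's actual implementation.

import numpy as np

def opaque(mask, direction):
    # "Opaque": light stops at the first surface it hits. For each scan
    # line, record the normalized depth of the first foreground pixel,
    # tracing the hand's outer boundary as seen from one side.
    # NOTE: illustrative assumption, not the paper's exact definition.
    m = mask > 0
    if direction in ("top", "bottom"):
        m = m.T                      # scan along columns instead of rows
    if direction in ("right", "bottom"):
        m = m[:, ::-1]               # scan from the opposite edge
    depth = m.argmax(axis=1).astype(np.float32)  # first hit per line
    depth[~m.any(axis=1)] = m.shape[1]           # empty line: full depth
    return depth / m.shape[1]        # normalize to [0, 1]

def glass(mask):
    # "Glass": light passes through and is attenuated by density. Record
    # the fraction of foreground pixels in each row and each column.
    m = (mask > 0).astype(np.float32)
    return np.concatenate([m.mean(axis=1), m.mean(axis=0)])

def abstract_image(mask):
    # Scan from four perspectives, append the density profiles, and
    # flatten everything into a single one-dimensional feature vector.
    views = [opaque(mask, d) for d in ("left", "right", "top", "bottom")]
    return np.concatenate(views + [glass(mask)])

# Toy 64x64 binary "hand" mask: the encoding yields 4*64 + 64 + 64 = 384
# features, small enough for a tiny fully connected classifier.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:44, 24:40] = 1
print(abstract_image(mask).shape)    # (384,)

Under these assumptions, a 64x64 input is compressed from 4,096 raw pixels to a 384-dimensional vector before any learned layer runs, which is the kind of reduction that would account for the much smaller model size and MAC count reported above.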
