Abstract
This work addresses the contemporary challenge of hand gesture recognition, motivated by the goals of revolutionizing military training methodologies, enhancing human-machine interaction, and improving communication between individuals with disabilities and machines. The methods for hand gesture recognition are examined in depth, covering both established computer vision approaches and the latest deep learning trends. The investigation focuses on the fundamental principles underlying models built on 3D convolutional neural networks (3D-CNNs) and vision transformers. The analyzed 3D-CNN architecture is a convolutional neural network with two convolutional layers and two pooling layers; each 3D convolution is obtained by stacking multiple adjacent frames into a 3D cube and convolving it with a 3D filter kernel. The vision transformer (ViT) architecture considered consists of a linear projection stage and a Transformer encoder comprising two sub-layers: the multi-head self-attention (MSA) layer and the feed-forward layer, also known as the multi-layer perceptron (MLP). The research pushes the boundaries of hand gesture recognition by deploying models trained on the ASL and NUS-II datasets, which contain a diverse array of sign language images. Model performance is assessed after 20 training epochs using metrics including recall, precision, and the F1 score. Additionally, the study investigates how performance changes when the ViT architecture is trained for 20 and for 40 epochs. This analysis reveals the scenarios in which 3D convolutional neural networks and vision transformers achieve superior accuracy.
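As a minimal illustration of the 3D convolution described above, the sketch below slides a 3D filter kernel over a stack of adjacent 2D frames. This is a naive pure-Python sketch for clarity, not the paper's implementation; real models would use an optimized library routine with learned kernels, strides, and padding.

```python
def conv3d(frames, kernel):
    """Naive 3D convolution (no stride, no padding).

    frames: stack of T frames, each an H x W list of lists.
    kernel: a kt x kh x kw list of lists of lists.
    Returns a volume of shape (T-kt+1) x (H-kh+1) x (W-kw+1), where each
    output value sums products over a 3D cube of adjacent frames.
    """
    T, H, W = len(frames), len(frames[0]), len(frames[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):          # slide along the temporal axis
        plane = []
        for i in range(H - kh + 1):      # slide along image height
            row = []
            for j in range(W - kw + 1):  # slide along image width
                row.append(sum(
                    frames[t + dt][i + di][j + dj] * kernel[dt][di][dj]
                    for dt in range(kt)
                    for di in range(kh)
                    for dj in range(kw)
                ))
            row and None
            row = row
            plane.append(row)
        out.append(plane)
    return out


# Three 2x2 frames of ones convolved with a 2x2x2 kernel of ones:
# each output value sums 8 ones, and the temporal axis shrinks to 2.
frames = [[[1, 1], [1, 1]] for _ in range(3)]
kernel = [[[1, 1], [1, 1]] for _ in range(2)]
print(conv3d(frames, kernel))  # → [[[8]], [[8]]]
```

In a full 3D-CNN, many such kernels are applied per layer, followed by pooling over the resulting spatio-temporal volume.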
At the same time, it sheds light on the constraints inherent to each approach under varying environmental conditions and computational resource budgets. The research identifies state-of-the-art deep learning architectures for hand gesture recognition that hold strong promise for further exploration and eventual integration into software products.
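The ViT pipeline mentioned above begins by cutting an image into fixed-size patches and flattening each one before the linear projection. A hedged pure-Python sketch of that patch-extraction step (the learned projection, positional embeddings, and the MSA/MLP encoder layers are omitted):

```python
def image_to_patches(image, patch):
    """Split an H x W image (list of lists) into flattened, non-overlapping
    patch x patch tiles, row-major -- the input to ViT's linear projection.

    Returns a list of (H // patch) * (W // patch) vectors, each of length
    patch * patch.
    """
    H, W = len(image), len(image[0])
    patches = []
    for r in range(0, H - patch + 1, patch):      # top-left row of each tile
        for c in range(0, W - patch + 1, patch):  # top-left column of each tile
            patches.append([
                image[r + i][c + j]
                for i in range(patch)
                for j in range(patch)
            ])
    return patches


# A 4x4 image split into 2x2 patches yields 4 flattened vectors of length 4.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
print(image_to_patches(image, 2))
# → [[0, 1, 4, 5], [2, 3, 6, 7], [8, 9, 12, 13], [10, 11, 14, 15]]
```

Each flattened patch would then be mapped by a learned linear projection into the token embeddings consumed by the Transformer encoder's MSA and MLP sub-layers.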