Abstract

This research addresses the challenge of recognizing hand gestures irrespective of the user's body posture, a crucial issue in medical care for people with speech impairments and in human-machine interfaces where precise gesture interpretation is vital. The aim is to engineer a hand gesture recognition system that remains effective across various body positions and camera viewpoints. A flexible camera arrangement was employed, integrating a CNN-Transformer hybrid model that leverages the local feature extraction of Convolutional Neural Networks and the self-attention mechanism of Transformers. Developed in Python with the PyTorch deep learning framework, the system focuses on sophisticated image processing techniques. A thorough literature review on gesture recognition systems and multi-view analysis informed the development. The system demonstrated high accuracy in recognizing hand gestures across diverse body postures and from multiple camera perspectives, significantly outperforming existing methods. It marks a notable advance in decoding complex gestures, a key requirement for medical applications and intricate human-machine interactions. This is primarily beneficial for people with speech impairments, for rehabilitation, and in human-machine interfaces, and it is poised to improve patient care and enhance interaction with advanced machinery and computer systems.
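The abstract describes a CNN-Transformer hybrid: convolutional layers extract local spatial features, which are then treated as a token sequence over which Transformer self-attention operates before classification. The paper does not give architectural details, so the following PyTorch sketch is purely illustrative; the class name, layer sizes, and number of gesture classes are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CNNTransformerGesture(nn.Module):
    """Illustrative CNN-Transformer hybrid for gesture classification.

    The CNN backbone extracts local spatial features; the Transformer
    encoder applies self-attention across the resulting feature tokens.
    All sizes here are hypothetical, not taken from the paper.
    """

    def __init__(self, num_classes: int = 10, d_model: int = 64):
        super().__init__()
        # Small convolutional backbone: RGB frame -> d_model feature maps
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W)
        f = self.cnn(x)                        # (batch, d_model, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (batch, seq_len, d_model)
        z = self.encoder(tokens)               # self-attention over spatial tokens
        return self.head(z.mean(dim=1))        # mean-pool tokens, classify

model = CNNTransformerGesture()
logits = model(torch.randn(2, 3, 64, 64))  # two dummy 64x64 frames
print(logits.shape)
```

In a multi-view setup such as the one described, feature tokens from several camera views could be concatenated along the sequence dimension before the encoder, letting self-attention relate features across viewpoints; that fusion strategy is likewise an assumption here, not a detail given in the abstract.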
