Abstract

Gesture recognition using WiFi is vital for human–computer interaction, smart homes, smart spaces, and elderly care. However, WiFi signals are non-stationary and sensitive to the environment, and traditional pattern-recognition-based human activity recognition (HAR) methods incur high training and deployment costs because they depend on the quality and quantity of training data. Despite extensive research on enhancing HAR techniques, system performance is still limited by the scarcity of training data and by the coupling of identity and action features. This research focuses on achieving gesture recognition across multiple users with a limited number of samples. Our findings show that conventional data augmentation methods cannot provide the sample diversity the system requires. We therefore propose a training-free augmentation strategy to supply adequate training data. Unlike conventional data enhancement approaches, this scheme develops data processing methods that distinguish between the samples that arise in practical applications, and thus effectively augments data for specific HAR tasks. To effectively extract action features from WiFi samples, an unsupervised cross-user domain sample generation (CUDSG) model is proposed. The model generates virtual gesture samples for new user domains by decoupling and recombining gesture and identity features, extending the sensing boundary of the system to new user domains without requiring a large number of participating users. Model performance was evaluated using various classifiers, such as SVM, KNN, and CNN. The results demonstrate a significant improvement in average classification accuracy from 57.3% to 98.4%, indicating that CUDSG is a highly effective tool for enhancing the performance of existing gesture recognition techniques.
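The core decouple-and-recombine idea can be illustrated with a minimal sketch. The encoders, feature dimensions, and fusion rule below are all hypothetical stand-ins (the abstract does not specify the CUDSG architecture); the sketch only shows the data flow: split each sample into a gesture component and an identity component, then pair one user's gesture features with another user's identity features to synthesize a virtual sample for the new user domain.

```python
import numpy as np

# Hypothetical sketch of decoupling and recombining gesture/identity
# features; the actual CUDSG model is not described in the abstract.

rng = np.random.default_rng(0)
DIM = 16  # hypothetical feature dimension of a processed WiFi sample

# Stand-in linear "encoders" that project a sample onto a
# gesture subspace and an identity subspace.
W_gesture = rng.standard_normal((DIM, DIM))
W_identity = rng.standard_normal((DIM, DIM))

def decouple(sample: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a sample into (gesture, identity) feature vectors."""
    return W_gesture @ sample, W_identity @ sample

def recombine(gesture_feat: np.ndarray, identity_feat: np.ndarray) -> np.ndarray:
    """Fuse a gesture feature with a (different user's) identity
    feature to synthesize a virtual sample. Simple averaging here;
    a learned decoder would play this role in practice."""
    return 0.5 * (gesture_feat + identity_feat)

# One sample from an existing user A and one from a new user B.
sample_a = rng.standard_normal(DIM)
sample_b = rng.standard_normal(DIM)

gesture_a, _ = decouple(sample_a)   # keep A's gesture features
_, identity_b = decouple(sample_b)  # keep B's identity features

# Virtual sample: user A's gesture "performed" by user B, usable
# as extra training data for B's domain.
virtual_sample = recombine(gesture_a, identity_b)
print(virtual_sample.shape)  # (16,)
```

Generating many such virtual samples per gesture class is what lets downstream classifiers (SVM, KNN, CNN) train on a new user domain without that user contributing many real samples.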
