Abstract

Gesture recognition is an important step toward ubiquitous WiFi-based human-computer interaction. However, most current WiFi-based gesture recognition systems rely on domain-specific training. To address this issue, we propose an attention-based cross-domain gesture recognition system using WiFi channel state information. To overcome the handcrafted feature extraction required by state-of-the-art cross-domain models, our model uses the attention mechanism to automatically extract domain-independent gesture features from the spatial and temporal dimensions. We implement the model and extensively evaluate its performance using the Widar3 dataset, which involves 16 users performing 6 gestures at 5 orientations and 5 positions in 3 different environments. The evaluation results show that the average in-domain gesture recognition accuracy achieved by the model is 99.67%, and the average cross-domain gesture recognition accuracies are 96.57%, 97.86%, and 94.2% across rooms, positions, and orientations, respectively. Its cross-domain gesture recognition accuracy significantly outperforms state-of-the-art methods.

Keywords: Cross-domain; Gesture recognition; Channel state information; Attention mechanism; Commodity WiFi
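As an illustration of the idea described above, the following is a minimal sketch of attention applied along both the spatial (antenna/subcarrier) and temporal axes of a CSI tensor, written in PyTorch. It is not the authors' architecture: every module name, tensor shape, and hyperparameter below is an assumption chosen for illustration only.

    # Minimal illustrative sketch (not the paper's released model):
    # attention over the spatial (antenna x subcarrier) and temporal
    # axes of WiFi CSI features, ending in a 6-way gesture classifier.
    import torch
    import torch.nn as nn

    class SpatialTemporalAttention(nn.Module):
        def __init__(self, num_channels: int, embed_dim: int = 64, num_heads: int = 4):
            super().__init__()
            self.proj = nn.Linear(num_channels, embed_dim)
            # Spatial attention: within each frame, treat every CSI channel
            # as a token (scalar feature) and let channels attend to each other.
            self.spatial_attn = nn.MultiheadAttention(1, 1, batch_first=True)
            # Temporal attention: treat each time step as a token.
            self.temporal_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.classifier = nn.Linear(embed_dim, 6)  # 6 gestures, as in Widar3

        def forward(self, csi: torch.Tensor) -> torch.Tensor:
            # csi: (batch, time, channels) CSI amplitude (or Doppler) features
            b, t, c = csi.shape
            # Spatial attention per frame: tokens = channels, feature dim = 1
            x = csi.reshape(b * t, c, 1)
            x, _ = self.spatial_attn(x, x, x)
            x = x.reshape(b, t, c)
            # Project channels into an embedding, then attend across time
            x = self.proj(x)
            x, _ = self.temporal_attn(x, x, x)
            # Mean-pool over time and classify the gesture
            return self.classifier(x.mean(dim=1))

    model = SpatialTemporalAttention(num_channels=90)  # e.g., 3 antennas x 30 subcarriers
    logits = model(torch.randn(8, 128, 90))            # batch of 8 clips, 128 frames each
    print(logits.shape)                                # torch.Size([8, 6])

Because both attention stages weight features by content rather than by fixed handcrafted rules, weights can concentrate on motion-induced CSI variation regardless of the room, position, or orientation, which is the intuition behind the cross-domain claim.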

