Abstract

Extensive efforts have been devoted to human gesture recognition with radio frequency (RF) signals. However, the performance of existing systems degrades when they are applied to novel gesture classes never seen in the training set, and handling such unseen gestures requires extra data collection and model retraining. In this article, we present XGest, a cross-label gesture recognition system that accurately recognizes gestures outside the predefined gesture set with zero extra training effort. The key insight of XGest is to build a knowledge-transfer framework between different gesture datasets. Specifically, we design a novel deep neural network that embeds gestures into a high-dimensional Euclidean space. Several techniques are designed in this model to tackle the limited spatial resolution imposed by RF hardware and the specular reflection effect of RF signals. We implement XGest on a commodity mmWave device, and extensive experiments demonstrate its significant recognition performance.
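The abstract does not detail how embedding-space recognition avoids retraining, but the usual pattern behind such cross-label systems is nearest-prototype lookup: each gesture class, including an unseen one, is represented by a reference embedding, and a new sample is assigned to the closest class in Euclidean space. The sketch below is illustrative only; the embedding dimensions, class names, and `classify` helper are hypothetical, not taken from the paper.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(embedding, prototypes):
    """Return the label of the nearest class prototype.

    Unseen gesture classes only need one reference embedding each,
    so no model retraining is required to support them.
    """
    return min(prototypes, key=lambda label: euclidean(embedding, prototypes[label]))

# Hypothetical reference embeddings for gestures outside the training set.
prototypes = {
    "swipe-left": [1.0, 0.0, 0.0],
    "push": [0.0, 1.0, 0.0],
}

print(classify([0.9, 0.1, 0.05], prototypes))  # -> swipe-left
```

In a real system the embeddings would come from the trained network applied to mmWave input, and prototypes would typically be averaged over a handful of reference samples per class.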
